
    Interviewer Effects on Nonresponse

    In face-to-face surveys, interviewers play a crucial role in making contact with and gaining cooperation from sample units. While some analyses investigate the influence of interviewers on nonresponse, they are typically restricted to single-country studies. However, interviewer training, contacting and cooperation strategies, as well as survey climates, may differ across countries. Combining call-record data from the European Social Survey (ESS) with data from a detailed interviewer questionnaire on attitudes and doorstep behavior, we find systematic country differences in nonresponse processes, which can in part be explained by differences in interviewer characteristics, such as contacting strategies and avowed doorstep behavior.
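    The abstract does not spell out the model, but cross-national interviewer effects on nonresponse of this kind are commonly examined with a multilevel logistic model in which sample units are nested in interviewers and interviewers in countries; a sketch in our notation (not taken from the paper) is:

        \mathrm{logit}\,\Pr(\text{cooperation}_{ijc} = 1) = \beta_0 + \mathbf{x}_{ijc}'\boldsymbol{\beta} + \mathbf{z}_{jc}'\boldsymbol{\gamma} + u_{jc} + v_c,
        \qquad u_{jc} \sim N(0, \sigma_u^2), \quad v_c \sim N(0, \sigma_v^2),

    where i indexes sample units, j interviewers, and c countries. Interviewer-level covariates z_{jc} (e.g., reported contacting strategies and doorstep behavior) are then judged by how much of the interviewer and country variances \sigma_u^2 and \sigma_v^2 they absorb.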

    Measuring Interviewer Characteristics Pertinent to Social Surveys: A Conceptual Framework

    Interviewer effects are found in all types of interviewer-mediated surveys, across disciplines and countries. While studies describing interviewer effects are manifold, identifying characteristics that explain these effects has proven difficult due to a lack of data on the interviewers. This paper proposes a conceptual framework of interviewer characteristics for explaining interviewer effects and its operationalization in an interviewer questionnaire. The framework encompasses four dimensions of interviewer characteristics: interviewer attitudes, interviewers’ own behaviour, interviewers’ experience with measurements, and interviewers’ expectations. Our analyses of the data collected from interviewers working on the fourth wave of SHARE Germany show that the above measures distinguish well between interviewers.
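    As a purely illustrative way of operationalizing the four dimensions, one could represent each interviewer as a simple record with one block of item scores per dimension; the field contents below are hypothetical and not taken from the actual questionnaire:

        # Illustrative record for the four framework dimensions; the example item
        # contents in the comments are hypothetical, not the questionnaire's items.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class InterviewerProfile:
            # 1. Interviewer attitudes (item scores; content hypothetical)
            attitudes: List[int] = field(default_factory=list)
            # 2. Interviewers' own behaviour (item scores; content hypothetical)
            own_behaviour: List[int] = field(default_factory=list)
            # 3. Interviewers' experience with measurements (item scores; content hypothetical)
            experience: List[int] = field(default_factory=list)
            # 4. Interviewers' expectations (item scores; content hypothetical)
            expectations: List[int] = field(default_factory=list)

        # Example usage with made-up scores:
        profile = InterviewerProfile(attitudes=[4, 5, 3], expectations=[2, 4])
        print(profile)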

    On Society’s Handling of the Corona Pandemic: Results of the Mannheimer Corona-Studie (Mannheim Corona Study)


    Modeling group-specific interviewer effects on survey participation using separate coding for random slopes in multilevel models

    Despite its importance for survey participation, the literature is sparse on how face-to-face interviewers differentially affect specific groups of sample units. In this paper, we demonstrate how an alternative parametrization of the random components in multilevel models, so-called separate coding, delivers valuable insights into differential interviewer effects for specific groups of sample members. Using the example of a face-to-face recruitment interview for a probability-based online panel, we detect small interviewer effects on survey participation for non-Internet households, whereas we find sizable interviewer effects for Internet households. Based on the proposed variance decomposition, we derive practical guidance for survey practitioners to address such differential interviewer effects.
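    The variance decomposition behind this comparison can be sketched as follows (our notation, not reproduced from the paper). With the usual contrast (dummy) coding, where D_{ij} = 1 if sample unit i approached by interviewer j is an Internet household, the linear predictor of the participation model carries a random intercept and a random contrast slope per interviewer:

        \eta_{ij} = \beta_0 + \beta_1 D_{ij} + u_{0j} + u_{1j} D_{ij}, \qquad (u_{0j}, u_{1j})' \sim N(\mathbf{0}, \boldsymbol{\Sigma}_u),

    so the interviewer variance is \sigma_{u0}^2 for non-Internet households but \sigma_{u0}^2 + \sigma_{u1}^2 + 2\sigma_{u01} for Internet households, and only the contrast variance \sigma_{u1}^2 is reported directly. Separate coding replaces intercept and contrast by the two group indicators D^{\mathrm{off}}_{ij} and D^{\mathrm{on}}_{ij}:

        \eta_{ij} = \beta_{\mathrm{off}} D^{\mathrm{off}}_{ij} + \beta_{\mathrm{on}} D^{\mathrm{on}}_{ij} + u_{\mathrm{off},j} D^{\mathrm{off}}_{ij} + u_{\mathrm{on},j} D^{\mathrm{on}}_{ij},

    so that the estimated variances \sigma_{\mathrm{off}}^2 and \sigma_{\mathrm{on}}^2 are themselves the group-specific interviewer effects.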

    Modelling Group-Specific Interviewer Effects on Nonresponse Using Separate Coding for Random Slopes in Multilevel Models

    To enhance response among underrepresented groups and hence to increase response rates and decrease potential nonresponse bias, survey practitioners often use interviewers in population surveys (Heerwegh, 2009). While interviewers tend to increase overall response rates in surveys (see Heerwegh, 2009), research on the determinants of nonresponse has also identified human interviewers as one reason for variations in response rates (see, for example, Couper & Groves, 1992; Durrant, Groves, Staetsky, & Steele, 2010; Durrant & Steele, 2009; Hox & de Leeuw, 2002; Loosveldt & Beullens, 2014; West & Blom, 2016). In addition, research on interviewer effects indicates that interviewers introduce nonresponse bias if they systematically differ in their success in obtaining responses from specific respondent groups (see West, Kreuter, & Jaenichen, 2013; West & Olson, 2010). Interviewers might therefore be a source of selective nonresponse in surveys. Interviewers might also differentially contribute to selective nonresponse, and hence to potential nonresponse bias, when interviewer effects are correlated with characteristics of the approached sample units (for an example, see Loosveldt & Beullens, 2014).

    Multilevel models that include dummies in the random part of the model to distinguish between respondent groups are commonly used to investigate whether interviewer effects on nonresponse differ across specific respondent groups (see Loosveldt & Beullens, 2014). When dummy coding, also referred to as contrast coding (Jones, 2013), is used for the random components of multilevel models of interviewer effects, the obtained variance estimates indicate to what extent the contrast between respondent groups varies across interviewers. Yet such a parameterization does not directly yield insight into the size of interviewer effects for specific respondent groups. Surveys with large imbalances among respondent groups benefit from investigating the variation in interviewer effect sizes on nonresponse, as this reveals whether the interviewer effect is of the same size for each respondent group. Group-specific interviewer effect sizes matter because they predict the effectiveness of interviewer-related fieldwork strategies (for examples on liking, matching, or prioritizing respondents with interviewers, see Durrant et al., 2010; Peytchev, Riley, Rosen, Murphy, & Lindblad, 2010; Pickery & Loosveldt, 2002, 2004) and thus the effective mitigation of potential nonresponse bias. Consequently, understanding group-specific interviewer effect sizes can aid the efficiency of respondent recruitment, because it helps explain why some interviewer-related fieldwork strategies strongly affect the participation of some respondent groups while other strategies have little effect. To obtain information on differences in interviewer effect size, we propose an alternative coding strategy, so-called separate coding, in multilevel models with random slopes (for examples, see Jones, 2013; Verbeke & Molenberghs, 2000, ch. 12.1). With separate coding, each variable represents a direct estimate of the interviewer effects for a specific respondent group (rather than the contrast with a reference category).

    Investigating nonresponse during the recruitment of a probability-based online panel separately for persons with and without prior internet access (using data from the German Internet Panel; see Blom et al., 2017), we find that the size of the interviewer effect differs between the two respondent groups. While we discover no interviewer effects on nonresponse for persons without internet access (offliners), we find sizable interviewer effects for persons with internet access (onliners). In addition, we identify interviewer characteristics that explain this group-specific nonresponse. Our results demonstrate that the implementation of interviewer-related fieldwork strategies might help to increase response rates among onliners, as the interviewer effect size for onliners was large relative to that for offliners.
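    A minimal sketch of separate coding for random interviewer slopes, here as a linear-probability approximation in Python’s statsmodels on simulated data (the paper’s actual models, estimator, and variable names may differ):

        # Separate coding for random interviewer slopes: linear-probability sketch
        # on simulated data; variable names and the linear approximation are ours.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n_interviewers, n_per_interviewer = 100, 30
        interviewer = np.repeat(np.arange(n_interviewers), n_per_interviewer)
        onliner = rng.binomial(1, 0.7, size=interviewer.size)   # 1 = prior internet access
        offliner = 1 - onliner

        # Simulate group-specific interviewer effects: sizable for onliners, tiny for offliners.
        u_on = rng.normal(0.0, 0.15, n_interviewers)
        u_off = rng.normal(0.0, 0.01, n_interviewers)
        p = 0.5 + onliner * u_on[interviewer] + offliner * u_off[interviewer]
        participated = rng.binomial(1, np.clip(p, 0.0, 1.0))

        df = pd.DataFrame({"participated": participated, "onliner": onliner,
                           "offliner": offliner, "interviewer": interviewer})

        # Separate coding: one random slope per group instead of intercept + contrast,
        # so the two estimated variances are the group-specific interviewer effects.
        model = smf.mixedlm("participated ~ onliner", df, groups=df["interviewer"],
                            re_formula="0 + offliner + onliner")
        result = model.fit()
        print(result.summary())   # 'offliner Var' and 'onliner Var' are reported directly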

    Response quality in nonprobability and probability-based online panels

    Recent years have seen a growing number of studies investigating the accuracy of nonprobability online panels; however, response quality in nonprobability online panels has not yet received much attention. To fill this gap, we investigate response quality in a comprehensive study of seven nonprobability online panels and three probability-based online panels with identical fieldwork periods and questionnaires in Germany. Three response quality indicators typically associated with survey satisficing are assessed: straight-lining in grid questions, item nonresponse, and midpoint selection in visual design experiments. Our results show that there is significantly more straight-lining in the nonprobability online panels than in the probability-based online panels. However, contrary to our expectations, there is no generalizable difference between nonprobability online panels and probability-based online panels with respect to item nonresponse. Finally, neither respondents in nonprobability online panels nor respondents in probability-based online panels are significantly affected by the visual design of the midpoint of the answer scale.
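    For illustration, simplified operationalizations of the three indicators could look like the following; these are our own toy definitions on made-up data, not the study’s exact coding:

        # Toy operationalizations of the three response-quality indicators.
        import numpy as np
        import pandas as pd

        # Toy grid of five items on a 1-5 scale; NaN marks item nonresponse.
        grid = pd.DataFrame({
            "item1": [3, 5, np.nan, 2],
            "item2": [3, 5, 4, 2],
            "item3": [3, 5, 4, np.nan],
            "item4": [3, 5, 4, 1],
            "item5": [3, 5, 4, 3],
        })

        # 1. Straight-lining: all answered items in the grid are identical.
        straight_lining = grid.nunique(axis=1) == 1

        # 2. Item nonresponse: share of missing items per respondent.
        item_nonresponse = grid.isna().mean(axis=1)

        # 3. Midpoint selection: share of answered items at the scale midpoint (3).
        midpoint_share = grid.eq(3).sum(axis=1) / grid.notna().sum(axis=1)

        print(pd.DataFrame({"straight_lining": straight_lining,
                            "item_nonresponse": item_nonresponse,
                            "midpoint_share": midpoint_share}))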

    Recruiting a Probability-Based Online Panel via Postal Mail: Experimental Evidence

    Once recruited, probability-based online panels have proven to enable high-quality and high-frequency data collection. In ever faster-paced societies and, recently, in times of pandemic lockdowns, such online survey infrastructures are invaluable to social research. In the absence of email sampling frames, one way of recruiting such a panel is via postal mail. However, few studies have examined how best to approach sample members and then transition them from the initial postal mail contact to online panel registration. To fill this gap, we implemented a large-scale experiment in the recruitment of the 2018 sample of the German Internet Panel (GIP), varying the panel recruitment design across four experimental conditions: online-only, concurrent mode, online-first, and paper-first. Our results show that the online-only design delivers higher online panel registration rates than the other recruitment designs. In addition, all experimental conditions led to similarly representative samples on key socio-demographic characteristics.
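    A comparison of registration rates across the four arms could, for instance, use a chi-square test of independence; the counts below are invented for illustration and are not the experiment’s results:

        # Chi-square comparison of registration rates across the four recruitment
        # conditions; the counts are hypothetical, not the GIP experiment's data.
        from scipy.stats import chi2_contingency

        conditions = ["online-only", "concurrent mode", "online-first", "paper-first"]
        registered = [520, 470, 500, 455]          # hypothetical registrations per arm
        not_registered = [1480, 1530, 1500, 1545]  # hypothetical non-registrations per arm

        table = [registered, not_registered]
        chi2, p_value, dof, expected = chi2_contingency(table)
        for cond, r, nr in zip(conditions, registered, not_registered):
            print(f"{cond}: registration rate = {r / (r + nr):.1%}")
        print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")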

    Explaining Interviewer Effects on Survey Unit Nonresponse: A Cross-Survey Analysis

    In interviewer-administered surveys, interviewers are involved in nearly all steps of the survey implementation. However, besides many positive aspects of their involvement, interviewers are, intentionally or unintentionally, a potential source of survey errors. In recent decades, a large body of literature has accumulated on measuring and explaining interviewer effects on survey unit nonresponse. Recently, West and Blom (2017) published a research synthesis on factors explaining interviewer effects on various sources of survey error, including survey unit nonresponse. They find that previous research reports great variability across surveys in the significance, and even the direction, of predictors of interviewer effects on survey unit nonresponse. This variability in findings may be due to a lack of consistency in key characteristics of the surveys examined, such as the group of interviewers employed, the survey organizations managing the interviewers, the sampling frame used, and the populations and time periods observed. In addition, the explanatory variables available to researchers examining interviewer effects on survey nonresponse differ widely across surveys and may thus influence the results. The diversity in findings, survey characteristics, and available explanatory variables calls for a more orchestrated effort to explain interviewer effects on survey unit nonresponse.

    Our paper fills this gap, as our analyses are based on four German surveys with a high level of consistency across them: GIP 2012, PIAAC, SHARE, and GIP 2014. The four surveys were conducted face-to-face in approximately the same time period in Germany. They were administered by the same survey organization with the same pool of interviewers. In addition, we were able to use the same area control variables and identical explanatory variables at the interviewer level. Despite these numerous similarities, our results show high variability across the surveys in the interviewer characteristics that explain interviewer effects on survey unit nonresponse. We also find that the interviewers employed in the four surveys are rather similar with regard to most of their socio-demographic characteristics, work experience, and working hours. Furthermore, the interviewers are similar with regard to their behavior, their reporting of deviations from standardized interviewing techniques, how they achieve response, and their reasons for working as an interviewer. The results therefore suggest that other differences between the four surveys, such as topic, sponsor, research team, or interviewer training, might explain the identified interviewer effects on survey unit nonresponse.
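    The abstract does not state the estimator, but interviewer effects on unit nonresponse are typically quantified via the intraclass correlation of a multilevel logistic model with random interviewer intercepts; a standard formulation is:

        \mathrm{logit}\,\Pr(R_{ij} = 1) = \beta_0 + \mathbf{x}_{ij}'\boldsymbol{\beta} + u_j, \qquad u_j \sim N(0, \sigma_u^2),
        \qquad \rho_{\mathrm{interviewer}} = \frac{\sigma_u^2}{\sigma_u^2 + \pi^2/3},

    where R_{ij} indicates whether sample unit i approached by interviewer j responded and \pi^2/3 \approx 3.29 is the residual variance of the standard logistic distribution. Interviewer-level explanatory variables are then assessed by how much of \sigma_u^2 they account for.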

    How does switching a Probability-Based Online Panel to a Smartphone-Optimized Design Affect

    In recent years, an increasing number of online panel participants respond to surveys on smartphones. As a result, survey practitioners face a difficult decision: either they hold the questionnaire design constant over time and thus stay with the original desktop-optimized design, or they switch to a smartphone-optimized format and thus accommodate respondents who prefer participating on their smartphones. Even though this decision is far from trivial, little research has so far been conducted on the effect of such an adjustment on panel members’ survey participation and device use. We report on the switch to a smartphone-optimized design in the German Internet Panel (GIP), an ongoing probability-based online panel that started in 2012 with a desktop-optimized design. We investigate whether the introduction of the smartphone-optimized design affected overall response rates and smartphone use in the GIP. Moreover, we examine how different ways of announcing the introduction of the smartphone-optimized design in the invitation email affected survey participation via smartphone.